Search Results: "Martin Pitt"

10 November 2014

Neil Williams: On getting NEW packages into stable

There's a lot of discussion/moaning/arguing at this time, so I thought I'd post something about how LAVA got into Debian Jessie, the work involved and the lessons I've learnt. Hopefully, it will help someone avoid the disappointment of having their package miss the migration into a future stable release. This was going to be a talk at the Minidebconf-uk in Cambridge, but I decided to put this out as a permanent blog entry in the hope that it will be a useful reference for the future, not just Jessie.

Context

LAVA relies on a number of dependencies which were, at the time all this started, NEW to Debian, as well as many others already in Debian. I'd been running LAVA using packages on my own system for a few months before the packages were ready for use on the main servers (I never actually installed LAVA using the old virtualenv method on my own systems, except in a VM). I did do quite a lot of this on my own, but I also had a team supporting the effort and valuing the benefits of moving to a packaged system.

At the time, LAVA was based on Ubuntu (12.04 LTS Precise Pangolin) and a new Ubuntu LTS was close (Trusty Tahr 14.04), but I started work on this in 2013. By the time my packages were ready for general usage, it was winter 2013 and much too close to get anything into Ubuntu in time for Trusty. So I started a local repo using space provided by Linaro. At the same time, I started uploading the dependencies to Debian. json-schema-validator, django-testscenarios and others arrived in April and May 2014 (Trusty was released in April). LAVA arrived in NEW in May and was accepted into unstable at the end of June. LAVA arrived in testing for the first time in July 2014. Upstream development continued apace, and a regular monthly upload, with some hotfixes in between, continued until close to the freeze. At this point, note that although upstream is a medium-sized team and the Debian packaging also has a team, all the uploads were made by me.

I planned ahead. I knew that I would be going to Macau for Linaro Connect in February, a critical stage in the finalisation of the packages and the migration of existing instances from the old methods. I knew that I would be on vacation from August through to the end of September 2014, including at least two weeks with absolutely no connectivity of any kind. Right at this time, Django 1.7 arrived in experimental with the intent to go into unstable and hence into Jessie. This was a headache for me; I initially sought to delay the migration until after Jessie. However, we discussed it upstream, allocated time within the busy schedule and also sought help from within Debian with the RFH tag. Raphaël Hertzog contributed patches for Django 1.7 support and we worked on those patches upstream once I was back from vacation. (The final week of my vacation was a work conference, so we had everyone together at one hacking table.) Still there was more to do: the Django 1.7 patches allowed the unit tests to complete but broke other parts of the lava-server package and needed subsequent tweaks and fixes. Even with all this, the auto-removal from testing for packages affected by RC bugs in their dependencies became very important to monitor (it still is). It would be useful if some packages had less complex dependency chains (I'm looking at you, uwsgi), as the auto-removal also covers build-depends. This led to some more headaches with libmatheval.
I'm not good with functional programming languages; I did have some exposure to Scheme when working on GnuCash upstream, but it wasn't pleasant. The thought of fixing a Scheme problem in the test suite of libmatheval was daunting. Again though, by asking for help, I found people in the upstream team who wanted to refresh their use of Scheme and were able to help out. The fix migrated into testing in October. Just for added complications, lava-server gained a few RC bugs of its own during that time too: fixed upstream, but awkward nonetheless.

Achievement unlocked

So that's how a complex package like lava-server gets into stable: with a lot of help. The main problem with top-level packages like this is the sheer weight of the dependency chain. Something seemingly unrelated (like libmatheval) can seriously derail the migrations. The package doesn't use the matheval support provided by uwsgi. The bug in matheval wasn't in the parts of matheval used by uwsgi. It wasn't in a language I am at all comfortable in fixing, but it's my name on the changelog of the NMU. That happened because I asked for help. OK, when Django 1.7 was scheduled to arrive in Debian unstable and I knew that LAVA was not ready, I reacted out of fear and anxiety. However, I sought help, help was provided, and that help was enough to get upstream to a point where the side-effects of the required changes could be fixed.

Maintaining a top-level package in Debian is becoming more like maintaining a core package in Debian, and that is a good thing. When your package has a lot of dependencies, those dependencies become part of the maintenance workload of your package. It doesn't matter if those are install-time dependencies, build dependencies or reverse dependencies. It doesn't actually matter if the issues in those packages are in languages you would personally wish to be expunged from the archive. It becomes your problem, but not yours alone.

Debian has a lot of flames right now, and Enrico encouraged us to look at what else is actually happening in Debian besides those arguments. Well, on top of all this with LAVA, I also did what I could to help the arm64 port along, and I'm very happy that this has been accepted into Jessie as an official release architecture. That's a much bigger story than LAVA, yet LAVA was and remains instrumental in how arm64 gained the support in the kernel and various upstreams which allowed patches to be accepted and fixes to be incorporated into Debian packages.

So a roll call of helpers who may otherwise not have been recognised via changelogs, in no particular order: also general thanks to the Debian FTP and Release teams.

Lessons learnt
  1. Allow time! None of the deadlines or timings involved in this entire process were hidden or unexpected. NEW always takes a finite but fairly lengthy amount of time, but that was the only timeframe with any amount of uncertainty. That is actually a benefit: it reminds you that this entire process is going to take a significant amount of time, and the only loser if you try to rush it is going to be you and your package. Plan for the time and be sceptical about how much time is actually required.
  2. Ask for help! Everyone in Debian is a volunteer. Yes, the upstream for this project is a team of developers paid to work on this code (and largely only this code), but the upstream also has priorities, requirements, objectives and deadlines. It's no good expecting upstream to do everything. It's no good leaving upstream insufficient time to fit the required work into the existing upstream schedules. So ask for help within upstream and within Debian; ask for help wherever you can. You don't know who may be able to help you until you ask. Be clear when asking for help: how would someone test their proposed fix? Exactly what are you asking for help doing? (Hint: "everything" is not a good answer.)
  3. Keep on top of announcements and changes. The release team in Debian have made the timetable strict and have published regular updates, guidelines and status notes. As maintainer, it is your responsibility to keep up with those changes and make others in the upstream team aware of the changes and the implications. Upstream will rely on you to provide accurate information about these requirements. This is almost more important than actually providing the uploads or fixes. Without keeping people informed, even asking for help can turn out to be counter-productive. Communicate within Debian too: talk to the teams, send status updates to bugs, even if the status is just "tag 123456 + help" (see the sketch after this list).
  4. Be realistic! Life happens around us, things change, personal timetables get torn up. Time for voluntary activity can appear and disappear (it tends to disappear far more often than it extends, so take that into account too).
  5. Do not expect others to do the work for you: asking for help is one thing, leaving the work to others is quite another. No complaining to the release team that they are blocking your work, and avoid pleading or arguing when a decision is made. The policies and procedures within Debian are generally clear, and there are quite enough arguments without adding more. Read the policies, read the guidelines, watch how other packages and other maintainers are handled, and avoid those mistakes. Make it easy for others to help deliver what you want.
  6. Get to know your dependency chain: follow the links on the packages.debian.org pages and get a handle on which packages are relevant to your package. Subscribe to the bug pages for some of the more high-risk packages. There are tools to help. rc-alert can help you spot problems with runtime dependencies (you do have your own package installed on a system running unstable; if not, get that running NOW). Watching build-dependencies is more difficult, especially build-dependencies of a runtime dependency, so watch the RC bug lists for packages in your dependency chain.
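As a footnote to point 3: that "tag 123456 + help" status update is just standard BTS control syntax. Sent as a mail to the bug tracker's control bot, it would look like this (123456 being a placeholder bug number):

  To: control@bugs.debian.org

  tag 123456 + help
  thanks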
Above all else, remember why you and upstream want the packages in Debian in the first place. Debian is a respected distribution and has an acknowledged reputation for stability and portability. The very qualities that you and your upstream desire from having your package in Debian have direct implications for the amount of work and the amount of time that will be required to get your packages into Debian and keep them there. Having your package in Debian will bring considerable benefits, but you will be required to invest a considerable amount of time. It is this contribution which is valuable to Debian and it is this work which will deliver the benefits you seek.

Being an expert in the one package is wildly inadequate. Debian is about the system, the whole distribution, and sooner or later you as the maintainer will be absolutely required to handle something which is so far out of your comfort zone it's untrue. The reality is that you are not expected to fix that problem; you are expected to handle that problem, and that includes seeking and acknowledging the help of others.

The story isn't over until release day. Having your package in testing the day before the freeze is one step. It may be a large step, but it is only one. The status of that package still needs monitoring. That long dependency chain can still come back and bite. Don't wait for problems to surprise you.

Finally

One thing I do ask is that other upstream teams and maintainers think about the dependency chain they are creating. It may sound nice to have bindings for every interpreted language possible when releasing your compiled library, but it does not help people using that library. Yes, it is more work releasing the bindings separately, because a stable API is going to be needed to allow version 1.2.3 to work alongside 1.2.2 and 1.3.0, or the entire effort is pointless. Consider how your upstream package migrates. Consider how adding yet another build-dependency for an optional component makes things exponentially harder for those who need to rely on that upstream. If it is truly optional, release it separately and keep backwards compatibility on each side. It is more work, but in reality all that is happening is that the work is being transferred from the distribution (where it helps only that one distribution and causes duplication into other distributions) into the upstream (where it helps all distributions). Think carefully about what constitutes core functionality and release the rest separately.

Combining bindings for php, ruby, python, java, lua and xslt into a single upstream release tarball is complete nonsense. It simply means that the package gets blocked from new uploads by the constant churn of being involved in every transition that occurs in the distribution. There is a very real risk that the package will miss a stable release simply by having fingers in too many pies. That hurts not only this upstream but every upstream trying to use any part of your code. Every developer likes to think that people are using and benefiting from their effort. It's not nice to directly harm the interests of other developers trying to use your code. It is not enough for the binary packages to be discrete: migrations happen by source package, and the released tarball needs to not include the optional bindings. It must be this way because it is the source package which determines whether version 1.2.3 of plugin foo can work with version 1.2.0 of the library as well as with version 1.3.0.
Maintainers regularly deal with these issues so talk to your upstream teams and explain why this is important to that particular team. Help other maintainers use your code and help make it easier to make a stable release of Debian. The quicker the freeze & release process becomes, the quicker new upstream versions can be uploaded and backported.

17 October 2014

Martin Pitt: Ramblings from LinuxCon/Plumbers 2014

I'm on my way home from Düsseldorf where I attended the LinuxCon Europe and Linux Plumbers conferences. I was quite surprised how huge LinuxCon was: there were about 1,500 people there! Certainly much more than last year in New Orleans.

Containers (in both LXC and Docker flavors) are the Big Thing everybody talks about and works with these days; there was hardly a presentation where these weren't mentioned at all, and (what felt like) half of the presentations were either about how to improve these, or how to use these technologies to solve problems. Some people/companies really take LXC to the max and try to do everything in them, including tasks which in the past you had only considered full VMs for, like untrusted third-party tenants. For example, there was an interesting talk on how to secure networking for containers, and pretty much everyone uses Docker or LXC now to deploy workloads and run CI tests. There are projects like fleet which manage systemd jobs across an entire cluster of containers (a distributed task scheduler), or like project-builder.org which auto-builds packages from each commit of projects.

Another common topic is the trend towards building/shipping complete (r/o) system images, atomic updates and all that goodness. The central thing here was certainly the "Stateless systems, factory reset, and golden images" talk, which analyzed the common requirements and proposed how to implement this with various package systems and scenarios. In my opinion this is certainly the way to go, as our current solution on Ubuntu Touch (i. e. Ubuntu's system-image) is far too limited and static yet; it doesn't extend to desktops/servers/cloud workloads at all. It's also a lot of work to implement this properly, so it's certainly understandable that we took that shortcut for prototyping and the relatively limited Touch phone environment.

At Plumbers my main occupations were the highly interesting LXC track, to see what's coming in the container world, and the systemd hackfest. At the latter I was again mostly listening (after all, I'm still learning most of the internals there...) and was able to work on some cleanups and improvements, like getting rid of some of Debian's patches and properly running the test suite. It was also great to sync up again with David Zeuthen about the future of udisks and some particular proposed new features. Looks like I'm the de-facto maintainer now, so I'll need to spend some time soon to review/include/clean up some much requested little features and some fixes.

All in all a great week to meet some fellows of the FOSS world again, getting to know a lot of new interesting people and projects, and re-learning to drink beer in the evening (I hardly drink any at home :-P). If you are interested you can also see my raw notes, but beware that they are mostly just scribblings. Now, off to next week's Canonical meeting in Washington, DC!

10 October 2014

Martin Pitt: Running autopkgtests in the cloud

It's great to see more and more packages in Debian and Ubuntu getting an autopkgtest. We now have some 660, and soon we'll get another ~ 4000 from Perl and Ruby packages. Both Debian's and Ubuntu's autopkgtest runner machines are currently static, manually maintained machines which ache under their load. They just don't scale, and at least Ubuntu's runners need quite a lot of handholding. This needs to stop. To quote Tim "The Tool Man" Taylor: "We need more power!". This is a perfect scenario to be put into a cloud with ephemeral VMs to run tests in. They scale, there is no privacy problem, and maintenance of the hosts then becomes Somebody Else's Problem.

I recently brushed up autopkgtest's ssh runner and the Nova setup script. Previous versions didn't support revert yet, tests that leaked processes caused eternal hangs due to the way ssh works, and image building wasn't yet supported well. autopkgtest 3.5.5 now gets along with all that and has a dozen other fixes. So let me introduce the Binford 6100 variable horsepower DEP-8 engine python-coated cloud test runner!

While you can run adt-run from your home machine, it's probably better to do it from an autopkgtest controller cloud instance as well. Testing frequently requires copying files and built package trees between testbeds and controller, which can be quite slow from home and causes timeouts. The requirements on the controller node are quite low: you either need the autopkgtest 3.5.5 package installed (possibly a backport to Debian Wheezy or Ubuntu 12.04 LTS), or run it from git ($checkout_dir/run-from-checkout), and other than that you only need python-novaclient and the usual $OS_* OpenStack environment variables. This controller can also stay running all the time and easily drive dozens of tests in parallel, as all the real testing action is happening in the ephemeral testbed VMs.

The most important preparation step for testing in the cloud is quite similar to testing in local VMs with adt-virt-qemu: you need to have suitable VM images. They should be generated every day so that the tests don't have to spend 15 minutes on dist-upgrading and rebooting, and they should be minimized. They should also be as similar as possible to the local VM images that you get with vmdebootstrap or adt-buildvm-ubuntu-cloud, so that test failures can easily be reproduced by developers on their local machines. To address this, I refactored the entire knowledge of how to turn a pristine default vmdebootstrap or cloud image into an autopkgtest environment into a single /usr/share/autopkgtest/adt-setup-vm script. adt-buildvm-ubuntu-cloud now uses this, you should use it with vmdebootstrap --customize (see adt-virt-qemu(1) for details), and it's also easy to run for building custom cloud images: essentially, you pick a suitable pristine image, nova boot an instance from it, run adt-setup-vm through ssh, then turn this into a new adt-specific "daily" image with nova image-create. I wrote a little script create-nova-adt-image.sh to demonstrate and automate this; the only parameter that it gets is the name of the pristine image to base on. This was tested on Canonical's Bootstack cloud, so it might need some adjustments on other clouds.
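For the curious, the rough shape of such a daily image build is sketched below. This is not the actual create-nova-adt-image.sh; the instance name, flavor, ssh user and address handling are assumptions.

  #!/bin/sh
  # illustrative sketch: boot a pristine image, prepare it as an
  # autopkgtest testbed, and snapshot it as a new "daily" image
  set -e
  BASE="$1"    # name of the pristine base image

  nova boot --image "$BASE" --flavor m1.small --poll adt-prepare
  # wait for ssh; $IP would be looked up via "nova show adt-prepare"
  scp /usr/share/autopkgtest/adt-setup-vm ubuntu@$IP:
  ssh ubuntu@$IP sudo ./adt-setup-vm
  nova image-create --poll adt-prepare "adt-$BASE"
  nova delete adt-prepare

Thus something like this should be run daily (pick the base images from nova image-list):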
  $ ./create-nova-adt-image.sh ubuntu-utopic-14.10-beta2-amd64-server-20140923-disk1.img
  $ ./create-nova-adt-image.sh ubuntu-utopic-14.10-beta2-i386-server-20140923-disk1.img
This will generate adt-utopic-i386 and adt-utopic-amd64. Now I picked 34 packages that have the "most demanding" tests, in terms of package size (libreoffice), kernel requirements (udisks2, network-manager), reboot requirement (systemd), lots of brittle tests (glib2.0, mysql-5.5), or needing Xvfb (shotwell):
  $ cat pkglist
  apport
  apt
  aptdaemon
  apache2
  autopilot-gtk
  autopkgtest
  binutils
  chromium-browser
  cups
  dbus
  gem2deb
  glib-networking
  glib2.0
  gvfs
  kcalc
  keystone
  libnih
  libreoffice
  lintian
  lxc
  mysql-5.5
  network-manager
  nut
  ofono-phonesim
  php5
  postgresql-9.4
  python3.4
  sbuild
  shotwell
  systemd-shim
  ubiquity
  ubuntu-drivers-common
  udisks2
  upstart
Now I created a shell wrapper around adt-run to work with the parallel tool and to keep the invocation in a single place:
$ cat adt-run-nova
#!/bin/sh -e
adt-run "$1" -U -o "/tmp/adt-$1" --- ssh -s nova -- \
    --flavor m1.small --image adt-utopic-i386 \
    --net-id 415a0839-eb05-4e7a-907c-413c657f4bf5
Please see /usr/share/autopkgtest/ssh-setup/nova for details of the arguments. --image is the image name we built above, --flavor should use a suitable memory/disk size from nova flavor-list, and --net-id is an "always need this constant to select a non-default network" option that is specific to Canonical Bootstack. Finally, let's run the packages from above using ten VMs in parallel:
  parallel -j 10 ./adt-run-nova -- $(< pkglist)
After a few iterations of bug fixing there are now only two failures left, which are due to flaky tests; the infrastructure now seems to hold up fairly well. Meanwhile, Vincent Ladeuil is working full steam to integrate this new stuff into the next-gen Ubuntu CI engine, so that we can soon deploy and run all this fully automatically in production. Happy testing!

15 September 2014

Martin Pitt: autopkgtest 3.5: Reboot support, Perl/Ruby implicit tests

Last week's autopkgtest 3.5 release (in Debian sid and Ubuntu Utopic) brings several new features which I'd like to announce.

Tests that reboot

For testing low-level packages like init or the kernel it is sometimes desirable to reboot the testbed in the middle of a test. For example, I added a new boot_and_services systemd autopkgtest which configures grub to boot with systemd as pid 1, reboots, and then checks that the most important services like lightdm, D-BUS, NetworkManager, and cron come up as expected. (This test will be expanded a lot in the future to cover other areas like the journal, logind, etc.) In a testbed which supports rebooting (currently only QEMU) your test will now find an autopkgtest-reboot command which the test calls with an arbitrary marker string. autopkgtest will then reboot the testbed, save/restore any files it needs to (like the test's file tree or previously created artifacts), and then re-run the test with ADT_REBOOT_MARK=mymarker. The new "Reboot during a test" section in README.package-tests explains this in detail with an example, and there is a small sketch at the end of this post.

Implicit test metadata for similar packages

The Debian pkg-perl team recently discussed how to add package tests to the ~ 3,000 Perl packages. For most of these the test metadata looks pretty much the same, so they created a new pkg-perl-autopkgtest package which centralizes the logic. autopkgtest 3.5 now supports an implicit debian/tests/control control file to avoid having to modify several thousand packages with exactly the same file. An initial run already looked quite promising: 65% of the packages pass their tests. There will be a few iterations to identify common failures and fix those in pkg-perl-autopkgtest and autopkgtest itself now. There is still some discussion about how implicit test control files go together with the DEP-8 specification, as other runners like sadt do not support them yet. Most probably we'll declare those packages XS-Testsuite: autopkgtest-pkg-perl instead of the usual autopkgtest.

In the same vein, Debian's Ruby maintainer (Antonio Terceiro) added implicit test control support for Ruby packages. We haven't done a mass test run with those yet, but their structure will probably look very similar.
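To make the reboot flow concrete, here is a minimal hypothetical test script (the marker name and checks are placeholders; README.package-tests is authoritative):

  #!/bin/sh
  # sketch of a debian/tests/ script that reboots the testbed once
  set -e
  if [ -z "$ADT_REBOOT_MARK" ]; then
      # first run: prepare whatever needs to survive the reboot, then
      # ask autopkgtest to reboot the testbed (this does not return)
      touch "$ADT_ARTIFACTS/prepared"
      autopkgtest-reboot mymarker
  elif [ "$ADT_REBOOT_MARK" = mymarker ]; then
      # second run, after the reboot: check that state was restored
      test -e "$ADT_ARTIFACTS/prepared"
  fi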

2 September 2014

Antonio Terceiro: DebConf 14: Community, Debian CI, Ruby, Redmine, and Noosfero

This time, for personal reasons I wasn't able to attend the full DebConf, which started on Saturday, August 22nd. I arrived at Portland on Tuesday the 26th by noon, on the 4th day of the conference. Even though I would have liked to arrive earlier, the loss was alleviated by the work of the amazing DebConf video team: I was able to follow remotely most of the sessions I would have liked to attend if I were there already.

As I will say to everyone, DebConf is for sure the best conference I have ever attended. The technical and philosophical discussions that take place in talks, BoF sessions or even unplanned ad-hoc gatherings are deep. The hacking moments, where you have a chance to pair with fellow developers with whom you usually only have contact remotely via IRC or email, are precious. That is all great. But definitively, catching up with old friends, and making new ones, is what makes DebConf so special. Your old friends are your old friends, and meeting them again after so much time is always a pleasure. New friendships will already start with a powerful bond, which is being part of the Debian community.

Being only 4 hours behind my home time zone, jetlag wasn't a big problem during the day. However, I was waking up too early in the morning and consequently getting tired very early at night, so I mostly didn't go out after the hacklabs were closed at 10PM. Despite all of the discussion, being in the audience for several talks, other social interactions and whatnot, during this DebConf I have managed to do quite some useful work.

debci and the Debian Continuous Integration project

I gave a talk where I discussed past, present, and future of debci and the Debian Continuous Integration project. The slides are available, as well as the video recording. One thing I want you to take away is that there is a difference between debci and the Debian Continuous Integration project:

A few days before DebConf, Cédric Boutillier managed to extract gem2deb-test-runner from gem2deb, so that autopkgtest tests can be run against any Ruby package that has tests by running gem2deb-test-runner --autopkgtest. gem2deb-test-runner will do the right thing: make sure that the tests don't use code from the source package, but instead run them against the installed package. Then, right after my talk I was glad to discover that the Perl team is also working on a similar tool that will automate running tests for their packages against the installed package. We agreed that they will send me a whitelist of packages for which we could just call that tool and have it do The Right Thing. We might be talking here about getting autopkgtest support (and consequently continuous integration) for free for almost 2000-4000 packages. The missing bits for this to happen are:

During a few days I have mentored Lucas Kanashiro, who also attended DebConf, on writing a patch to add support for email notifications in debci, so maintainers can be pro-actively notified of status changes (pass/fail, fail/pass) in their packages. I have also started hacking on the support for distributed workers, based on the initial work by Martin Pitt:

Ruby

I had some discussion with Christian about making Rubygems install to $HOME by default when the user is not root. We discussed a few implementation options, and while I don't have a solution yet, we have a better understanding of the potential pitfalls. The Ruby BoF session on Friday produced a few interesting discussions.
Some take-away points include, but are not limited to:

Redmine

I was able to make Redmine work with the Rails 4 stack we currently have in unstable/testing. This required using a snapshot of the still unreleased version 3.0, based on the rails-4.1 branch in the upstream Subversion repository, as source. I am a little nervous about using an upstream snapshot, though. According to the project's roadmap (http://www.redmine.org/projects/redmine/roadmap), the only purpose of the 3.0 release will be to upgrade to Rails 4, but before that happens there should be a 2.6.0 release that is also not released yet. 3.0 should be equivalent to that 2.6.0 version both feature-wise and, especially, bug-wise. The only problem is that we don't know what that 2.6.0 looks like yet. According to the roadmap it seems there is not much left in terms of features for 2.6.0, though. The updated package is not in unstable yet, but will be soon. It needs more testing and a good update to the documentation. Those interested in helping to test Redmine on jessie before the freeze, please get in touch with me.

Noosfero

I gave a lightning talk on Noosfero, a platform for social networking websites I am upstream for. It is a Rails application licensed under the AGPLv3, and there are packages for wheezy. You can check out the slides I used. Video recording is not available yet, but should be soon.

That's it. I am looking forward to DebConf 15 at Heidelberg. :-)

24 July 2014

Martin Pitt: vim config for Markdown+LaTeX pandoc editing

I have used LaTeX and latex-beamer for pretty much my entire life of document and presentation production, i. e. since about my 9th school grade. I've always found the LaTeX syntax a bit clumsy, but with good enough editor shortcuts to insert e. g. \begin{itemize} \item ... \end{itemize} with just two keystrokes, it has been good enough for me.

A few months ago a friend of mine pointed out pandoc to me, which is just simply awesome. It can convert between a million document formats, but most importantly take Markdown and spit out LaTeX, or directly PDF (through an intermediate step of building a LaTeX document and calling pdftex). It also has a template for beamer. Documents now look soo much more readable and are easier to write! And you can always directly write LaTeX commands without any fuss, so that you can use Markdown for the structure/headings/enumerations/etc., and LaTeX for formulas, XYTex and the other goodies. That's how it always should have been!

So last night I finally sat down and created a vim config for it:
"-- pandoc Markdown+LaTeX -------------------------------------------
function s:MDSettings()
    inoremap <buffer> <Leader>n \note[item]{}<Esc>i
    noremap <buffer> <Leader>b :! pandoc -t beamer % -o %<.pdf<CR><CR>
    noremap <buffer> <Leader>l :! pandoc -t latex % -o %<.pdf<CR>
    noremap <buffer> <Leader>v :! evince %<.pdf 2>&1 >/dev/null &<CR><CR>
    " adjust syntax highlighting for LaTeX parts
    "   inline formulas:
    syntax region Statement oneline matchgroup=Delimiter start="\$" end="\$"
    "   environments:
    syntax region Statement matchgroup=Delimiter start="\\begin{.*}" end="\\end{.*}" contains=Statement
    "   commands:
    syntax region Statement matchgroup=Delimiter start="{" end="}" contains=Statement
endfunction
autocmd BufRead,BufNewFile *.md setfiletype markdown
autocmd FileType markdown :call <SID>MDSettings()
That gives me good enough (with some quirks) highlighting without trying to interpret TeX stuff as Markdown, and shortcuts for calling pandoc and evince. Improvements appreciated!
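For illustration, the kind of input this setup is meant for: plain pandoc Markdown with raw LaTeX mixed in where needed (a made-up snippet):

  # Introduction

  Inline math just works: $e^{i\pi} + 1 = 0$.

  - Markdown handles the structure and lists,
  - while raw LaTeX environments pass straight through to the PDF:

  \begin{align}
    a^2 + b^2 &= c^2
  \end{align}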

22 July 2014

Martin Pitt: autopkgtest 3.2: CLI cleanup, shell command tests, click improvements

Yesterday's autopkgtest 3.2 release brings several changes and improvements that developers should be aware of.

Cleanup of CLI options and config files

Previous adt-run versions had rather complex, confusing, and rarely (if ever?) used options for filtering binaries and building sources without testing them. All of those (--instantiate, --sources-tests, --sources-no-tests, --built-binaries-filter, --binaries-forbuilds, and --binaries-fortests) now went away. Now there is only -B/--no-built-binaries left, which disables building/using binaries for the subsequent unbuilt tree or dsc arguments (by default they get built and their binaries used for tests), and I added its opposite --built-binaries for completeness (although you most probably never need this).

The --help output now is a lot easier to read, both due to the above cleanup, and also because it now shows several paragraphs for each group of related options, and sorts them in descending importance. The manpage got updated accordingly.

Another new feature is that you can now put arbitrary parts of the command line into a file (thanks to porting to Python's argparse), with one option/argument per line. So you could e. g. create config files for options and runners which you use often:
$ cat adt_sid
--output-dir=/tmp/out
-s
---
schroot
sid
$ adt-run libpng @adt_sid
Shell command tests

If your test only contains a shell command or two, or you want to re-use an existing upstream test executable and just need to wrap it with some command like dbus-launch or env, you can use the new Test-Command: field instead of Tests: to specify the shell command directly:
Test-Command: xvfb-run -a src/tests/run
Depends: @, xvfb, [...]
This avoids having to write lots of tiny wrappers in debian/tests/. This was already possible for click manifests; this release now also brings this for deb packages.

Click improvements

It is now very easy to define an autopilot test with extra package dependencies or restrictions, without having to specify the full command, using the new autopilot_module test definition. See /usr/share/doc/autopkgtest/README.click-tests.html for details.

If your test fails and you just want to run your test with additional dependencies or changed restrictions, you can now avoid having to rebuild the .click by pointing --override-control (which previously only worked for deb packages) to the locally modified manifest. You can also (ab)use this to e. g. add the autopilot -v option to autopilot_module.

Unpacking of test dependencies was made more efficient by not downloading Python 2 module packages (which cannot be handled in "unpack into temp dir" mode anyway). Finally, I made the adb setup script more robust and also faster.

As usual, every change in control formats, CLI etc. has been documented in the manpages and the various READMEs. Enjoy!

1 July 2014

Martin Pitt: deb, click, schroot, LXC, QEMU, phone, cloud: One autopkgtest to Rule Them All!

We currently use completely different methods and tools for building test beds and running tests for Debian vs. Click packages, for normal uploads vs. CI airline landings vs. upstream project merge proposal testing, and keep lots of knowledge about Click package test metadata external and not easily accessible/discoverable. Today I released autopkgtest 3.0 (and 3.0.1 with a few minor updates) which is a major milestone in unifying how we run package tests both locally and in production CI. The goals of this are:

So, let's dive into the new features!

New runner: adt-virt-ssh

We want to run tests on real hardware such as a laptop of a particular brand with a particular graphics card, or an Ubuntu phone. We also want to restructure our current CI machinery to run tests on a real OpenStack cloud and gradually get rid of our hand-maintained QA lab with its test machines. While these use cases seem rather different, they both have in common that there is an already existing machine which is pretty much only accessible with ssh. Once you have an ssh connection, they look pretty much the same; you just need different initial setup (like fiddling with adb, calling nova boot, etc.) to prepare them.

So the new adt-virt-ssh runner factorizes all the common bits such as communicating with adt-run, auto-detecting sudo availability, doing SSH connection sharing etc., and delegates the target-specific bits to a "setup script". E. g. we could specify --setup-script ssh-setup-nova or --setup-script ssh-setup-adb, which would then get called with "open" at the appropriate time by adt-run; it calls the nova commands to create a VM, or runs a few adb commands to install/start ssh and install the public key. Then autopkgtest does its thing, and eventually calls the script with "cleanup" again. The actual protocol is a bit more involved (see the manpage), but that's the general idea. autopkgtest now ships readymade scripts for these two use cases. So you could e. g. run the libpng tests in a temporary cloud VM:
# if you don't have one, create it with "nova keypair-create"
$ nova keypair-list
[...]
  pitti   9f:31:cf:78:50:4f:42:04:7a:87:d7:2a:75:5e:46:56  
# find a suitable image
$ nova image-list 
[...]
  ca2e362c-62c9-4c0d-82a6-5d6a37fcb251   Ubuntu Server 14.04 LTS (amd64 20140607.1) - Partner Image                           ACTIVE    
$ nova flavor-list 
[...]
  100   standard.xsmall    1024        10     10                 1       1.0           N/A        
# now run the tests: please be patient, this takes a few mins!
$ adt-run libpng --setup-commands="apt-get update" --- ssh -s /usr/share/autopkgtest/ssh-setup/nova -- \
   -f standard.xsmall -i ca2e362c-62c9-4c0d-82a6-5d6a37fcb251 -k pitti
[...]
adt-run [16:23:16]: test build:  - - - - - - - - - - results - - - - - - - - - -
build                PASS
adt-run: @@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@ tests done.
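The general shape of such a setup script is roughly the following. This is only an illustrative sketch; the authoritative interface, including the exact output that adt-virt-ssh expects from "open", is in the manpage and the SKELETON template mentioned below:

  #!/bin/sh
  # hypothetical minimal adt-virt-ssh setup script
  set -e
  case "$1" in
      open)
          # create the testbed, e. g. "nova boot ..." or a few adb
          # commands, then print the connection details (user, host,
          # etc.) in the format that adt-virt-ssh expects
          ;;
      cleanup)
          # tear the testbed down again, e. g. "nova delete ..."
          ;;
  esac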
Please see man adt-virt-ssh for details on how to use it and how to write setup scripts. There is also a commented /usr/share/autopkgtest/ssh-setup/SKELETON template for writing your own for your use cases. You can also not use any setup script and just specify user and host name as options, but please remember that the ssh runner cannot clean up after itself, so never use this on important machines which you can't reset/reinstall!

Test dependency installation without apt/root

Ubuntu phones with system images have a read-only file system where you can't install test dependencies with apt. A similar case is using the null runner without root. When apt-get install is not available, autopkgtest now has a reduced fallback mode: it downloads the required test dependencies, unpacks them into a temporary directory, and runs the tests with $PATH, $PYTHONPATH, $GI_TYPELIB_PATH, etc. pointing to the unpacked temp dir. Of course this only works for packages which are relocatable in that way, i. e. libraries, Python modules, or command line tools; it will totally fail for things which look for config files, plugins etc. in hardcoded directory paths. But it's good enough for the purposes of Click package testing, such as installing autopilot, libautopilot-qt etc.

Click package support

autopkgtest now recognizes click source directories and *.click package arguments, and introduces a new test metadata specification syntax in a click package manifest. This is similar in spirit and capabilities to DEP-8 debian/tests/control, except that it's using JSON:
    "x-test":  
        "unit": "tests/unittests",
        "smoke":  
            "path": "tests/smoketest",
            "depends": ["shunit2", "moreutils"],
            "restrictions": ["allow-stderr"]
         ,
        "another":  
            "command": "echo hello > /tmp/world.txt"
         
     
For convenience, there is also some magic to make running autopilot tests particularly simple. E. g. our existing click packages usually specify something like
    "x-test":  
        "autopilot": "ubuntu_calculator_app"
     
which is enough to "do what I mean", i. e. implicitly add the autopilot test depends and run autopilot with the specified test module name. You can specify your own dependencies and/or commands, and restrictions etc., of course. So with this, and the previous support for non-apt test dependencies and the ssh runner, we can put all this together to run the tests for e. g. the Ubuntu calculator app on the phone:
$ bzr branch lp:ubuntu-calculator-app
# built straight from that branch; TODO: where is the "official" download URL?
$ wget http://people.canonical.com/~pitti/tmp/com.ubuntu.calculator_1.3.283_all.click
$ adt-run ubuntu-calculator-app/ com.ubuntu.calculator_1.3.283_all.click --- \
      ssh -s /usr/share/autopkgtest/ssh-setup/adb
[..]
Traceback (most recent call last):
  File "/tmp/adt-run.KfY5bG/tree/tests/autopilot/ubuntu_calculator_app/tests/test_simple_page.py", line 93, in test_divide_with_infinity_length_result_number
    self._assert_result("0.33333333")
  File "/tmp/adt-run.KfY5bG/tree/tests/autopilot/ubuntu_calculator_app/tests/test_simple_page.py", line 63, in _assert_result
    self.main_view.get_result, Eventually(Equals(expected_result)))
  File "/usr/lib/python3/dist-packages/testtools/testcase.py", line 406, in assertThat
    raise mismatch_error
testtools.matchers._impl.MismatchError: After 10.0 seconds test failed: '0.33333333' != '0.3'
Ran 33 tests in 295.586s
FAILED (failures=1)
Note that the current adb ssh setup script deals with some things like applying the autopilot click AppArmor hooks and disabling screen dimming, but it does not do the first-time setup (connecting to the network, doing the gesture intro) and unlocking the screen. These are still on the TODO list, but I need to find out how to do these properly. Help appreciated!

Click app tests in schroot/containers

But that's not the only thing you can do! autopkgtest has all these other runners, so why not try and run them in a schroot or container? To emulate the environment of an Ubuntu Touch session I wrote a --setup-commands script:
adt-run --setup-commands /usr/share/autopkgtest/setup-commands/ubuntu-touch-session \
    ubuntu-calculator-app/ com.ubuntu.calculator_1.3.283_all.click --- schroot utopic
This will actually work in the sense of running (and succeeding) the autopilot tests, but it will fail due to a lot of libust[11345/11358]: Error: Error opening shm /lttng-ust-wait... warnings on stderr. I don't know what these mean, just that I also see them on the phone itself occasionally.

I also wrote another setup-commands script which emulates "read-only apt", so that you can test the unpack-only fallback. So you could prepare a container with click and the app framework preinstalled (so that it doesn't always take ages to install them), starting from a standard adt-build-lxc container:
$ sudo lxc-clone -o adt-utopic -n click
$ sudo lxc-start -n click
  # run "sudo apt-get install click ubuntu-sdk-libs ubuntu-app-launch-tools" there
  # then "sudo powerdown"
# current apparmor profile doesn't allow remounting something read-only
$ echo "lxc.aa_profile = unconfined"   sudo tee -a /var/lib/lxc/click/config
Now that container has enough stuff preinstalled to be reasonably fast to set up, and the remaining test dependencies (mostly autopilot) work fine with the unpack/$*_PATH fallback:
$ adt-run --setup-commands /usr/share/autopkgtest/setup-commands/ubuntu-touch-session \
          --setup-commands /usr/share/autopkgtest/setup-commands/ro-apt \
          ubuntu-calculator-app/ com.ubuntu.calculator_1.3.283_all.click \
          --- lxc -es click
This will successfully run all the tests, and provided you have apt-cacher-ng installed, it only takes a few seconds to set up. This might be a nice thing to do on merge proposals, if you don't have an actual phone at hand, or don't want to clutter it up. autopkgtest 3.0.1 will be available in Utopic tomorrow (through autosyncs). If you can't wait to try it out, download it from my people.c.c page. Feedback appreciated!

1 June 2014

Antonio Terceiro: An introduction to the Debian Continuous Integration project

Debian is a big system. At the time of writing, looking at my local package list caches tells me that the unstable suite contains 21306 source packages, and 42867 binary packages on amd64. Between these 42867 binary packages, there is an unthinkable number of inter-package dependencies. For example, the dependency graph of the ruby package contains another 20-something packages. A new version of any of these packages can potentially break some functionality in the ruby package.

And that dependency graph is very small. Looking at the dependency graph for, say, the rails package will make your eyes bleed. I tried it here, and GraphViz needed a PNG image with 7653x10003 pixels to draw it. It ain't pretty. Installing rails on a clean Debian system will pull in another 109 packages as part of the dependency chain. Again, as new versions of those packages are uploaded to the archive, there is a probability that a backwards-incompatible change, or even a bug fix which was being worked around, might make some functionality in rails stop working. Even if that probability is low for each package in the dependency chain, with enough packages the probability of any of them causing problems for rails is quite high. And still the rails dependency chain is not that big: libreoffice will pull in another 264 packages, gnome will pull in 1311 dependencies, and kde-full 1320 (!).

With a system this big, problems will arrive, and that's a fact of life. As developers, what we can do is try to spot these problems as early as possible, and fix them in time to make a solid release with the high quality Debian is known for. While automated testing is not the proverbial Silver Bullet of Software Engineering, it is an effective way of finding regressions.

Back in 2006, Ian Jackson started the development of autopkgtest as a tool to test Debian packages in their installed form (as opposed to testing packages using their source tree). In 2011, the autopkgtest test suite format was proposed as a standard for the Debian project, in what we now know as the DEP-8 specification. Since then, some maintainers such as myself started experimenting with DEP-8 tests in their packages. There was an expectation in the air that someday, someone would run those tests for the entire archive, and that would be a precious source of QA information.

During the holiday break last year, I decided to give it a shot. I initially called the codebase dep8. Later I renamed it to debci, since it could potentially also run other types of test suites in the future. Since early January, ci.debian.net runs an instance of debci for the Debian project. The Debian Continuous Integration will trigger tests at most 4 times a day, 3 hours after each dinstall run. It will update a local APT cache and look for packages that declare a DEP-8 test suite. Each package with a test suite will then have its test suite executed if there was any change in its dependency chain since the last test run. Existing test results are published at ci.debian.net every hour, and at the end of each batch a global status is updated. Maintainers can subscribe to a per-package Atom feed to keep up with their package test results. People interested in the overall status can subscribe to a global Atom feed of events.

Since the introduction of Debian CI in mid-January 2014, we have seen an amazing increase in the number of packages with test suites. We had little less than 200 packages with test suites back then, against around 350 now (early June 2014).
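On the declaring side, the investment is small. A minimal sketch of a DEP-8 test suite (the test name is made up; the FAQ mentioned below has the authoritative details):

  # debian/control, source stanza: advertise the test suite
  XS-Testsuite: autopkgtest

  # debian/tests/control: one test case; "Depends: @" means
  # all binary packages built from this source
  Tests: smoke
  Depends: @

debian/tests/smoke is then just an executable that exercises the installed package and exits non-zero (or prints to stderr) on failure.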
The ratio of packages passing their test suites has also improved a lot, going from less than 50% to more than 75%.

There is documentation available, including a FAQ for package maintainers with further information about how the system works, how to declare test suites in their packages and how to reproduce test runs locally. Also available is development information about debci itself, for those inclined to help improve the system. This is just the beginning: debci is under a good rate of development, and you can expect to see a constant flux of improvements. In particular, I would like to mention a few people who are giving amazing contributions to the project:

28 April 2014

Martin Pitt: Booting Ubuntu with systemd: Now in Utopic

Hot on the heels of my previous announcement of my systemd PPA for trusty, I'm now happy to announce that the latest systemd 204-10ubuntu1 just landed in Utopic, after sorting out enough of the current uninstallability in -proposed. The other fixes (bluez, resolvconf, lightdm, etc.) already landed a few days ago. Compared to the PPA these have a lot of other fixes and cleanups, due to the excellent hackfest that we held last weekend. So, upgrade today and let us know about problems in bugs tagged "systemd-boot".

I think systemd in current utopic works well enough to not break a developer's day-to-day workflow, so we can now start parallelizing the work of identifying packages which only have upstart jobs and providing corresponding systemd units (or SysV scripts). Also, this hasn't yet been tested on the phone at all; I'm sure that it'll require quite some work (e. g. lxc-android-config has a lot of upstart jobs). To clarify, there is no fixed date/plan/deadline for when this will be done; in particular it might well last more than one release cycle. So we'll release (i. e. switch to it as a default) when it's ready :-)
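If you want to join the porting effort, the shape of a minimal service unit is small; a sketch for a hypothetical daemon "food" that so far only has an upstart job:

  [Unit]
  Description=Foo daemon

  [Service]
  ExecStart=/usr/sbin/food

  [Install]
  WantedBy=multi-user.target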

24 April 2014

Martin Pitt: Booting Ubuntu with systemd: Test packages available

On the last UDS we talked about migrating from upstart to systemd to boot Ubuntu, after Mark announced that Ubuntu will follow Debian in that regard. There's a lot of work to do, but it parallelizes well once developers can run systemd on their workstations or in VMs easily and the system boots up enough to still be able to work with it.

So today I merged our systemd package with Debian again, dropped the systemd-services split (which wasn't accepted by Debian and will be unnecessary now), and put it into my systemd PPA. Quite surprisingly, this booted a fresh 14.04 VM pretty much right away (of course there's no Plymouth prettiness). The main two things which were missing were NetworkManager and lightdm, as these don't have an init.d script at all (NM) or it isn't enabled (lightdm). Thus the PPA also contains updated packages for these two which provide a proper systemd unit. With that, the desktop is pretty much fully working, except for some details like cron not running. I didn't go through /etc/init/*.conf with a small comb yet to check which upstart jobs need to be ported; that's now part of the TODO list.

So, if you want to help with that, or just test and tell us what's wrong, take the plunge. In a 14.04 VM (or real machine if you feel adventurous), do
  sudo add-apt-repository ppa:pitti/systemd
  sudo apt-get update
  sudo apt-get dist-upgrade
This will replace systemd-services with systemd, update network-manager and lightdm, and a few libraries. Up to now, when you reboot you'll still get good old upstart. To actually boot with systemd, press Shift during boot to get the grub menu, edit the Ubuntu stanza, and append this to the linux line: init=/lib/systemd/systemd. For the record, if pressing Shift doesn't work for you (too fast, VM, or similar), enable the grub menu with
  sudo sed -i '/GRUB_HIDDEN_TIMEOUT/ s/^/#/' /etc/default/grub
  sudo update-grub
Once you are satisfied that your system boots well enough, you can make this permanent by adding the init= option to /etc/default/grub (and possibly removing the comment sign from the GRUB_HIDDEN_TIMEOUT lines) and running sudo update-grub again. To go back to upstart, just edit the file again, remove the init= option, and run sudo update-grub again. I'll be at the Debian systemd/GNOME sprint next weekend, so I feel reasonably well prepared now. :-)

Update: As the comments pointed out, this bricked /etc/resolv.conf. I now uploaded a resolvconf package to the PPA which provides the missing unit (counterpart to the /etc/init/resolvconf.conf upstart job) and this now works fine. If you are in that situation, please boot with upstart, and do the following to clean up:
  sudo rm /etc/resolv.conf
  sudo ln -s ../run/resolvconf/resolv.conf /etc/resolv.conf
Then you can boot back to systemd. Update 2: If you want to help testing, please file bugs with a systemd-boot tag. See the list of known bugs when booting with systemd.
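By the way, a quick check for which init you actually booted with (prints systemd, or init for upstart):

  $ ps -p 1 -o comm=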

23 March 2014

Gregor Herrmann: RC bugs 2013/49 - 2014/12

since people keep talking to me about my RC bug fixing activities, I thought it might be time again for a short report. to be honest, I mostly stopped my almost daily work at some point in december, partly because the overall number of RC bugs affecting both testing & unstable is quite low (& therefore the number of easy-to-fix bugs), due to the auto-removal policy of the release team (kudos!). but I still kept track of the RC bugs I worked on, & here's the list; as you can see, mostly pkg-perl bugs.

ps: the how-can-i-help package is a nice tool for finding RC bugs in packages you care about. install it if you haven't so far!

26 September 2013

Martin Pitt: How to watch system D-BUS method calls

The current default D-BUS configuration (at least on Ubuntu) disallows monitoring method calls on the system D-BUS (dbus-monitor --system), which makes debugging rather cumbersome; this worked years ago, but apparently got changed for security reasons. It took me half an hour to figure out how to enable this for debugging, and as this has annoyingly little Google juice (I didn't find any solution), let's add some.

The trick seems to be to set a global policy to be able to eavesdrop on any method call after the individual /etc/dbus-1/system.d/*.conf files have applied their restrictions, for which there is already a convenient facility. Create a file /etc/dbus-1/system-local.conf with
<?xml version="1.0" encoding="UTF-8" ?>
<!DOCTYPE busconfig PUBLIC
  "-//freedesktop//DTD D-BUS Bus Configuration 1.0//EN"
  "http://www.freedesktop.org/standards/dbus/1.0/busconfig.dtd">
<busconfig>
  <policy user="root">
    <!-- Allow everything to be sent -->
    <allow send_destination="*" eavesdrop="true"/>
    <!-- Allow everything to be received -->
    <allow eavesdrop="true"/>
    <allow send_type="method_call"/>
  </policy>
</busconfig>
Then sudo dbus-monitor --system displays everything. Needless to say, you don't want this file on any production system! Does anyone know an easier way? My first naive stab was to run dbus-monitor as root, but that doesn't make any difference at all.

Update: Turns out this is already described in a better way at https://wiki.ubuntu.com/DebuggingDBus. Yay me for not finding that... I updated the above recipe to limit access to root, which is much better indeed.
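As a side note, once eavesdropping works you can cut down the flood with a standard match rule, e. g. to see only method calls:

  $ sudo dbus-monitor --system "type='method_call'"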

24 May 2013

Martin Pitt: umockdev 0.2.2 released

I did a 0.2.2 maintenance release for umockdev to fix building with Vala 0.16.1, gcc 4.8 (the changed sizeof behaviour caused segfaults), and current udev releases (umockdev-record stumbled over the new link priority fields of udevadm). There are also a couple of bug fixes, but no new features.

11 April 2013

Martin Pitt: New fatrace released, Debian package coming

Paul Wise poked me this morning about uploading fatrace ("file access trace", see the original announcement for details) to Debian, thanks for the reminder! So I filed an Intent To Package, and will upload it in a few days unless some discussion evolves. I also took the opportunity to do some modernization: the power-usage-report script now uses the current PowerTop 2.x instead of the old 1.13, uses Python 3 now, and includes the process device activity in the report. I released this as 0.5. The actual fatrace binary didn't change its behaviour, it just got some code optimizations; thanks to Yann Droneaud for those.

4 April 2013

Martin Pitt: Urgent PostgreSQL security updates for Debian/Ubuntu

PostgreSQL just released security updates. 9.1 (as found in Debian testing and unstable and Ubuntu 11.10 and later) is affected by a critical remote vulnerability which potentially allows anyone who can access the TCP port (without credentials) to corrupt local files. If your PostgreSQL database exposes the TCP port to any potentially untrusted location, please shut down your servers and update now!

PostgreSQL 8.4 for Debian stable (squeeze) and Ubuntu 8.04 LTS and 10.04 LTS also got an update, but these are much less urgent. Debian and Ubuntu advisories for all stable releases, as well as Debian testing, are going out as we speak. The updates are already on security.debian.org and security.ubuntu.com. I also uploaded updates for Debian unstable (8.4, 9.1, and 9.2 in experimental) and the Ubuntu backports PPA, but it will take a bit for these to build, as we don't have embargoed staging builds for those. Christoph updated the apt.postgresql.org repository as well.

Warning: If you use the current Ubuntu raring Beta-2 candidate images, you will still have the old version. So if you do anything serious with those installations, please make sure to upgrade immediately.

Update: Debian and Ubuntu security announcements have been sent out, and all packages in the backports PPA are built. Please see the official FAQ if you want to know some more details about the nature of the vulnerabilities.
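For the common case on Debian/Ubuntu, updating boils down to something like the following (the server package name depends on your major version; postgresql-9.1 shown here):

  $ sudo apt-get update
  $ sudo apt-get install postgresql-9.1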

21 March 2013

Martin Pitt: python-dbusmock 0.6 released

I just pushed out a new python-dbusmock release 0.6. Calling a method on the mock now emits a MethodCalled signal on the org.freedesktop.DBus.Mock interface. In some cases this is easier to track than parsing the mock's log or using GetMethodCalls. Thanks to Lars Uebernickel for this.

DBusMockObject.AddTemplate() and DBusTestCase.spawn_server_template() can now load local templates from your own project by specifying a path to a *.py file as the template name. Thanks to Lucas De Marchi for this feature.

I also wrote a quite comprehensive template for systemd's logind. It stubs out the power management functionality as well as user/seat/session objects, and is convincing enough for loginctl. Some bits like AttachDevice are missing, as this sounds unlikely to be required for D-BUS mock tests, but please let me know if you need anything else.

The mock processes now terminate automatically if their connected D-BUS goes down, as advertised in the documentation. You can get the new tarball from Launchpad, and I uploaded it to Debian experimental now. Enjoy!

5 March 2013

Martin Pitt: Automatically generating documentation from GIR files

Many libraries build a GObject introspection repository (*.gir) these days, which allows the library to be used from many scripting (Python, JavaScript, Perl, etc.) and other (e. g. Vala) languages without the need for manually writing bindings for each of those. One complaint that I hear surprisingly often is "there is zero documentation for those bindings". Tools for building documentation out of a .gir have existed for a long time already; far too many people simply don't seem to know about them. For example, to build Yelp XML documentation out of the libnotify bindings for Python:
  $ g-ir-doc-tool --language=Python -o /tmp/notify-doc /usr/share/gir-1.0/Notify-0.7.gir
Then you can call yelp /tmp/notify-doc to browse the documentation. You can of course also use the standard Mallard tools to convert them to HTML for sticking them on a website:
  $ cd /tmp/notify-doc
  $ yelp-build html .
Admittedly they are far from pretty, and there are still lots of refinements that should be done for the documentation itself (like adding language-specific examples) and also for the generated result (prettification, dynamic search, and what not), but it's certainly far from "nothing", and a good start. If you are interested in working on this, please show up in #introspection or discuss it on bugzilla, desktop-devel-list@, or the library-specific lists/bug trackers.
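For context, the bindings that these generated pages document are the ones you call directly from the scripting language; e. g. from Python, the libnotify example above corresponds to something like this (a minimal sketch, assuming libnotify and its GIR data are installed):
  import gi
  gi.require_version('Notify', '0.7')
  from gi.repository import Notify

  Notify.init('gir-doc-demo')  # one-time library initialization
  n = Notify.Notification.new('Hello',
                              'This call is described in the generated docs',
                              'dialog-information')
  n.show()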

20 February 2013

Martin Pitt: umockdev 0.2: record/replay input devices

I just released umockdev 0.2. The big new feature of this release is support for evdev ioctls. I. e. you can now record what e. g. X.org is doing to touchpads, touch screens, etc.:
  $ umockdev-record /dev/input/event15 > /tmp/touchpad.umockdev
  # umockdev-record -i /tmp/touchpad.ioctl /dev/input/event15 -- Xorg -logfile /dev/null
and load that back into a testbed with X.org using the dummy driver:
  $ cat <<EOF > xorg-dummy.conf
  Section "Device"
        Identifier "test"
        Driver "dummy"
  EndSection
  EOF
  $ umockdev-run -l /tmp/touchpad.umockdev -i /dev/input/event15=/tmp/touchpad.ioctl -- \
       Xorg -config xorg-dummy.conf -logfile /tmp/X.log :5
Then e. g. DISPLAY=:5 xinput will recognize the simulated device. Note that Xvfb won't work, as it does not use udev for device discovery but only adds the XTest virtual devices and nothing else, so you need to use the real X.org with the dummy driver to run this as a normal user. This enables easier debugging of new kinds of input devices, as well as writing tests for handling multiple touchscreens/monitors, integration tests of Wacom devices, and so on. This release also works with older automakes and Vala 0.16, so you can use it on Ubuntu 12.04 LTS; the daily PPA now has packages for that as well. Attention: this version no longer works with recorded ioctl files from version 0.1. A more detailed list of changes:

12 February 2013

Martin Pitt: umockdev: record and mock hardware for debugging and testing

What is this? umockdev is a set of tools and a library to mock hardware devices for programs that handle Linux hardware devices. It also provides tools to record the properties and behaviour of particular devices, and to run a program or test suite under a test bed with the previously recorded devices loaded. This allows developers of software like gphoto or libmtp to receive these records in bug reports and recreate the problem on their system without having access to the affected hardware, and to write regression tests that do not need any particular privileges and thus can run in a standard make check. After working on it for several weeks and lots of rumbling on G+, it's now useful and documented enough for the first release 0.1!

Component overview: umockdev consists of the following parts:

Example: Record and replay PtP/MTP USB devices. So how do you use umockdev? For the "debug a problem" use case you usually don't want to write a program that uses libumockdev, but just use the command line tools. Let's capture some runs from libmtp tools, and replay them in a mock environment:

Example: using the library to fake a battery. If you want to write regression tests, it's usually more flexible to use the library instead of calling everything through umockdev-run. As a simple example, let's pretend we want to write tests for upower. Batteries, and power supplies in general, are simple devices in the sense that userspace programs such as upower only communicate with them through sysfs and uevents; neither /dev nor ioctls are necessary. docs/examples/ has two example programs that show how to use libumockdev to create a fake battery device, change it to low charge, send an uevent, and run upower on a local test system D-BUS in the testbed, watching the result with upower --monitor-detail. battery.c shows how to do that with plain GObject in C; battery.py is the equivalent program in Python, using the GI binding. You can just run the latter like this:
  umockdev-wrapper python3 docs/examples/battery.py
and you will see that upowerd (which runs on a temporary local system D-BUS in the test bed) will report a single battery with 75% charge, which gets down to 2.5% a second later. The gist of it is that you create a test bed with
  UMockdevTestbed *testbed = umockdev_testbed_new ();
and add a device with certain sysfs attributes and udev properties with
    gchar *sys_bat = umockdev_testbed_add_device (
            testbed, "power_supply", "fakeBAT0", NULL,
            /* attributes */
            "type", "Battery",
            "present", "1",
            "status", "Discharging",
            "energy_full", "60000000",
            "energy_full_design", "80000000",
            "energy_now", "48000000",
            "voltage_now", "12000000",
            NULL,
            /* properties */
            "POWER_SUPPLY_ONLINE", "1",
            NULL);
You can then e. g. change an attribute and synthesize a change uevent with
  umockdev_testbed_set_attribute (testbed, sys_bat, "energy_now", "1500000");
  umockdev_testbed_uevent (testbed, sys_bat, "change");
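In Python the same steps look roughly like this through the GI binding (a minimal sketch written from memory; treat the exact add_device argument layout as an assumption and see docs/examples/battery.py for the authoritative version; like battery.py, it needs to run under umockdev-wrapper):
  import gi
  gi.require_version('UMockdev', '1.0')
  from gi.repository import UMockdev

  testbed = UMockdev.Testbed.new()
  # attributes and properties are flat name/value string lists here,
  # rather than the varargs of the C API
  sys_bat = testbed.add_device('power_supply', 'fakeBAT0', None,
      ['type', 'Battery',
       'present', '1',
       'status', 'Discharging',
       'energy_full', '60000000',
       'energy_full_design', '80000000',
       'energy_now', '48000000',
       'voltage_now', '12000000'],
      ['POWER_SUPPLY_ONLINE', '1'])

  # drain the battery and notify udev listeners about it
  testbed.set_attribute(sys_bat, 'energy_now', '1500000')
  testbed.uevent(sys_bat, 'change')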
As the Python sketch above shows, introspected languages and Vala work the same way as the C version; they just look a bit leaner due to proper object semantics.

Packages: I have a packaging branch for Ubuntu and a recipe to do daily builds with the latest upstream code into my daily builds PPA (for 12.10 and raring). I will soon upload it to Raring proper, too.

What's next? The current set of features should already get you quite far for a range of devices. I'd love to get feedback if you use this for anything useful, in particular on how to improve the API, the command line tools, or the text dump format. I'm not really happy with the split between umockdev (sys/dev) and ioctl files, nor with the relatively complicated CLI syntax of umockdev-record, so any suggestion is welcome. One use case that I have for myself is to extend the coverage of ioctls for input devices such as touch screens and Wacom tablets, so that we can write some tests for gnome-settings-daemon plugins. I also want to find a way to pass ioctls back to the test suite/calling program instead of having to handle them all in the preload library, which would make it a lot more flexible; however, due to the nature of the ioctl ABI this is not easy.

Where to go: The code is hosted on GitHub in the umockdev project. This started out as a systemd branch to add the functionality to libudev, but after a discussion with Kay we decided to keep it separate; I kept it in git anyway, given how popular it is today. For the bzr lovers, Launchpad has an import at lp:umockdev, and release tarballs will be on Launchpad as well. Please file bugs and enhancement requests in the GitHub tracker. Finally, if you have questions or want to discuss something, you can always find me on IRC (pitti on Freenode or GNOME). Thanks for your attention and happy testing!
